algorithmic accountability

Terms from Artificial Intelligence: humans at the heart of algorithms

Algorithmic accountability concerns the question of who is responsible, legally and ethically, when an algorithm in general, or an AI in particular, goes wrong. In market-based economies, if companies (and their insurers) know that they will be sued, or even face criminal prosecution, should their AI fail, it is assumed they will be proactive in creating safe and appropriate software. Explicit regulation, by contrast, is usually implemented too slowly for fast-moving technology and may hamper innovation. It may therefore be argued that ensuring algorithmic accountability is a more effective, and even more democratic, option.

Used on page 478